Healthy Brain Network EEG Data Download Page

Release 1.1 - n = 603

Subject data are grouped into individual folders and packaged as .tar.gz archives; each .tar.gz file is approximately 2 GB. Be sure to review the Release Notes and Phenotypic Information before using these data. Use the checkboxes to select the subjects you would like to download.

Site-Subject Information

For each subject, EEG data may have been collected at any of our three sites. The site at which each subject's EEG data were collected is listed in a spreadsheet (Staten Island = 1, Mobile Van = 2, Midtown = 3). Click here to access Subject-Site_R1_1.xlsx
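As a minimal sketch, the numeric site codes above can be decoded in a script like this (the function name is ours; the code-to-site mapping is the one given in the spreadsheet description):

```shell
# Map the numeric site codes used in Subject-Site_R1_1.xlsx to site names
# (Staten Island = 1, Mobile Van = 2, Midtown = 3).
site_name() {
  case "$1" in
    1) echo "Staten Island" ;;
    2) echo "Mobile Van" ;;
    3) echo "Midtown" ;;
    *) echo "unknown" ;;
  esac
}

site_name 2   # prints "Mobile Van"
```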

AWS and Cyberduck

MRI and EEG data, organized into folders by participant, may also be accessed through an Amazon Web Services (AWS) S3 bucket.

Each file in the S3 bucket can only be accessed over HTTP (i.e., no FTP or SCP). You can obtain a URL for each desired file and then download it using an HTTP client such as a web browser, wget, or curl. Each file can only be accessed by its literal name; wildcards will not work.
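For example, a direct download of one subject's archive can be scripted as below. The subject ID is hypothetical, and the virtual-hosted S3 URL form (`fcp-indi.s3.amazonaws.com`) is an assumption based on the bucket path given in the Cyberduck instructions; substitute the literal file name of the subject you want.

```shell
# Hypothetical subject ID -- substitute a real one from the download list.
SUBJECT="NDARAA000AAA"
# Bucket path per the Cyberduck instructions; the URL form is an assumption.
URL="https://fcp-indi.s3.amazonaws.com/data/Archives/HBN/${SUBJECT}.tar.gz"
echo "${URL}"
# To actually download, use either:
#   curl -fL -O "${URL}"
#   wget "${URL}"
```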

There are file transfer programs that handle S3 natively and let you navigate the data in a file browser. Cyberduck is one such program; it works with Windows and Mac OS X, and its command-line version also works with Linux. Instructions for using Cyberduck are as follows:

  1. Open Cyberduck and click on Open Connection.
  2. Set the application protocol in the dropdown menu to S3 (Amazon Simple Storage Service).
  3. Set the server to s3.amazonaws.com.
  4. Check the box labelled Anonymous Login.
  5. Expand the More Options tab:
    • To access compressed files (.tar.gz), set Path to fcp-indi/data/Archives/HBN
    • To access uncompressed files, set Path to fcp-indi/data/Projects/HBN
  6. Click Connect.
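For scripted access, the same anonymous browsing can be done with the AWS CLI; this is an alternative we suggest, not part of the official instructions. The `--no-sign-request` flag requests the public bucket without credentials:

```shell
# Bucket paths from the Cyberduck instructions above.
ARCHIVES="s3://fcp-indi/data/Archives/HBN"   # compressed (.tar.gz) files
PROJECTS="s3://fcp-indi/data/Projects/HBN"   # uncompressed files
# Print the listing commands; run them with the AWS CLI installed.
echo "aws s3 ls ${ARCHIVES}/ --no-sign-request"
echo "aws s3 ls ${PROJECTS}/ --no-sign-request"
```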

The end result should appear as follows:

[Screenshot: Cyberduck connection settings]

Release 1 Datasets (n = 474)

Set 1

Set 2

Set 3

Set 4

Set 5

Additional datasets (n = 129)

Set 1

Set 2

Set 3

Set 4

Set 5

Datasets deleted from release 1 (n = 26)